
Conversation

@shiyang-weng (Contributor) commented Aug 5, 2025

Implemented FP8 QEmbeddingBag on CPU, currently supporting:

  • include_last_offset=True
  • mode="sum"

Next steps

  1. Expand the set of supported modes.
  2. Switch to native FP8 instructions.
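For illustration, here is a minimal call sketch against the qembeddingbag schema this PR adds to torchao/ops.py (shown later in this thread). The shapes, scale values, and the mode enum (assumed to follow torch.nn.EmbeddingBag, where 0 means "sum") are assumptions, and the op was later renamed to _scaled_embedding_bag:

import torch

# Hypothetical FP8 embedding table and lookup; all values are illustrative.
num_rows, dim = 1000, 128
qweight = torch.randn(num_rows, dim).to(torch.float8_e4m3fn)
indices = torch.randint(0, num_rows, (64,))
offsets = torch.tensor([0, 16, 32, 48, 64])  # include_last_offset=True
weight_scale = torch.tensor(0.05)            # dequantization scale

out = torch.ops.torchao.qembeddingbag(
    qweight, indices, offsets, weight_scale,
    1.0,   # o_scale: output scale (assumed semantics)
    0,     # mode: assumed 0 == "sum", the only mode supported so far
    True,  # include_last_offset
)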


pytorch-bot bot commented Aug 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2686

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit df09264 with merge base 9056c46:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@shiyang-weng shiyang-weng marked this pull request as draft August 5, 2025 01:37
@meta-cla meta-cla bot added the CLA Signed label Aug 5, 2025
"CPU" not in torch._C._dispatch_dump("torchao::qembeddingbag"),
reason="cpp kernels not built",
)
def test_embeddingbag_cpu(self):
Contributor

the test should be added here I think: https://github.com/pytorch/ao/blob/main/test/test_ops.py

@shiyang-weng

This comment was marked as outdated.


pytorch-bot bot commented Aug 7, 2025

❌ 🤖 pytorchbot command failed:

@pytorchbot: error: argument command: invalid choice: 'topic: new feature' (choose from 'merge', 'revert', 'rebase', 'label', 'drci', 'cherry-pick')

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci,cherry-pick} ...

Try @pytorchbot --help for more info.

@shiyang-weng (Contributor Author)

@pytorchbot label "topic: new feature"

@pytorch-bot pytorch-bot bot added the topic: new feature label Aug 7, 2025
@shiyang-weng shiyang-weng marked this pull request as ready for review August 7, 2025 02:43
@Xia-Weiwen (Collaborator) left a comment

LGTM. Have you run some benchmarks to ensure it's not too slow?

@Xia-Weiwen Xia-Weiwen requested a review from jerryzh168 August 11, 2025 02:10
@shiyang-weng (Contributor Author)

@jerryzh168 Could you help review this PR?

torchao/ops.py Outdated
@@ -70,6 +70,9 @@
lib.define(
"da8w4_linear_cpu(Tensor input, Tensor input_scales, Tensor input_qzeros, Tensor weight, Tensor weight_scales, Tensor weight_qzeros, Tensor compensation, Tensor? bias, ScalarType output_dtype) -> Tensor"
)
lib.define(
"qembeddingbag(Tensor qweight, Tensor indices, Tensor offsets, Tensor weight_scale, float o_scale, int mode, bool include_last_offset) -> Tensor"
Collaborator

@jerryzh168 Thanks for reviewing. Yes, I think so, except that the implementation in this PR has limited functionality so far.

Contributor Author

This operator is used for inference, so I did not add any gradient-related parameters: scale_grad_by_freq, sparse, per_sample_weights, padding_idx.

@jerryzh168 (Contributor) commented Aug 15, 2025

I think we should add this to PyTorch directly if that's the case. float8 is a native dtype in PyTorch, so it makes the most sense to add the functionality there; we can error out in the op if some arg combination is unsupported or invalid for float8.
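For illustration, a minimal sketch of the kind of fail-fast guard suggested here; the function name and messages are hypothetical:

def _check_scaled_embedding_bag_args(mode: str, include_last_offset: bool) -> None:
    # Hypothetical validation: reject arg combinations the float8 path
    # does not support yet instead of silently computing a wrong result.
    if mode != "sum":
        raise NotImplementedError(
            f"float8 embedding_bag supports mode='sum' only, got mode={mode!r}"
        )
    if not include_last_offset:
        raise NotImplementedError(
            "float8 embedding_bag requires include_last_offset=True for now"
        )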

Contributor Author

Intel platforms have FP8 instructions, and when we are ready we hope to update this kernel to use them. As far as I know, the latest GCC is required. Would that be difficult to support in PyTorch?

Collaborator

So, since this PR adds a quantized version of this op, do you think it is better to add it in torchao rather than in torch core? Thanks.

Contributor

Yeah, my question is: can this be implemented by extending the embedding_bag op in PyTorch and doing the scaling in torchao, or would performance be a concern there?

Contributor Author

This is a memory-bound operator, so repeated reads and writes cause significant performance degradation. For example, where a fused kernel reads and writes the data once (and this pattern occurs many times in DLRM), doing the scaling separately means reading and writing twice, which roughly halves performance.
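To illustrate the unfused baseline being compared against (names and shapes are hypothetical): emulating the op with the native embedding_bag requires materializing a dequantized fp32 copy of the table in a separate pass, which is the extra read/write described above:

import torch
import torch.nn.functional as F

num_rows, dim = 1000, 128
qweight = torch.randn(num_rows, dim).to(torch.float8_e4m3fn)  # FP8 table
weight_scale = torch.tensor(0.05)
indices = torch.randint(0, num_rows, (64,))
offsets = torch.tensor([0, 16, 32, 48, 64])

# Pass 1: dequantize the whole table (extra read + write of the data).
deq = qweight.to(torch.float32) * weight_scale
# Pass 2: the actual reduction. A fused kernel would do both in one pass.
ref = F.embedding_bag(indices, deq, offsets, mode="sum",
                      include_last_offset=True)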

@jerryzh168 (Contributor) commented Aug 22, 2025

OK, sounds good. Maybe rename this to _scaled_embedding_bag to follow these ops: https://github.com/pytorch/pytorch/blob/31a41daff49f2cde941d8b9e35cb2eaeeb606c0d/aten/src/ATen/native/native_functions.yaml#L7135

The leading _ indicates it's a prototype op, since you may want to update the arg list, expand hardware coverage, etc. later.
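A sketch of what the renamed schema in torchao/ops.py might look like, assuming the argument list stays the same as the qembeddingbag version above (lib is the op library object already used in that file):

lib.define(
    "_scaled_embedding_bag(Tensor qweight, Tensor indices, Tensor offsets, Tensor weight_scale, float o_scale, int mode, bool include_last_offset) -> Tensor"
)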

Contributor Author

Done

test/test_ops.py Outdated
mode_enum,
include_last_offset,
).to(dtype)
torch.testing.assert_close(refe_out, test_out, atol=0, rtol=0)
Contributor

is this too strict?

Contributor Author

changed to 1e-5
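A sketch of the relaxed check against the assert shown above; whether 1e-5 was applied to atol, rtol, or both is an assumption here:

torch.testing.assert_close(refe_out, test_out, atol=1e-5, rtol=1e-5)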

@shiyang-weng shiyang-weng changed the title [CPU][float8] Add QEmbeddingbag kernel [CPU][float8] Add scaled_embedding_bag kernel Aug 22, 2025
@shiyang-weng (Contributor Author)

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge failed

Reason: 1 mandatory check(s) are pending/not yet run. The first few are:

  • Facebook CLA Check

Dig deeper by viewing the pending checks on hud

Details for Dev Infra team (raised by workflow job)

Failing merge rule: superuser

@jerryzh168 (Contributor)

> @pytorchbot merge

We just merge manually with the button in torchao.

@jerryzh168 (Contributor)

Also, is this op built by default? Ideally it should be optional so it does not impact the normal build; we have seen errors where kernels from prototype features break the torchao build.

@Xia-Weiwen Xia-Weiwen merged commit 2a53216 into pytorch:main Aug 28, 2025
20 checks passed
@shiyang-weng (Contributor Author)

> Also, is this op built by default? Ideally it should be optional so it does not impact the normal build; we have seen errors where kernels from prototype features break the torchao build.

Like the other kernels under cpu/*.cpp, it is not built by default; it is built only when USE_CPU_KERNELS=1.
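(For reference, enabling this when building from source would look something like USE_CPU_KERNELS=1 pip install -e . from the repo root; the install command itself is an assumption, only the USE_CPU_KERNELS=1 gate is stated above.)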

andrewor14 added a commit that referenced this pull request Aug 29, 2025